Humans aren't just not perfect Bayesians; very few of us are even Bayesian wannabes. In essence, everyone who thinks that it is more moral/ethical to hold some proposition than to hold its converse is taking some criterion other than apparent truth as normative with respect to the evaluation of beliefs.
This is something of a nitpick, but I think that it is more moral/ethical to hold a proposition than to hold its converse if there is good reason to think that that proposition is true. Is this un-Bayesian?
It's a meta-level/aliasing sort of problem, I think. You don't believe it's more ethical/moral to believe any specific proposition, you believe it's more ethical/moral to believe 'the proposition most likely to be true', which is a variable which can be filled with whatever proposition the situation suggests, so it's a different class of thing. Effectively it's equivalent to 'taking apparent truth as normative', so I'd call it the only position of that format that is Bayesian.
Hmm... thanks for writing this. I just realized that I may resemble your argumentative friend in some ways. I should bookmark this.
Stanovich's "dysrationalia" sense of stupidity is one of my greatest fears.
I didn't know whether to post this reply to "Black swans from the future" or here, so I'll just reference it:
http://www.overcomingbias.com/2007/04/black_swans_fro.html#comment-65404590
Good post, Eliezer.
I've pointed before to this very good review of Philip Tetlock's book, Expert Political Judgment. The review describes the results of Tetlock's experiments evaluating expert predictions in the field of international politics, where they did very poorly. On average the experts did about as well as random predictions and were badly outperformed by simple statistical extrapolations.
Even after going over the many ways the experts failed in detail, and even though the review is titled "Everybody’s An Expert", the reviewer concludes, "But the best lesson of Tetlock’s book may be the one that he seems most reluctant to draw: Think for yourself."
Does that make sense, though? Think for yourself? If you've just read an entire book describing how poorly people did who thought for themselves and had a lot more knowledge than you do, is it really likely that you will do better to think for yourself? This advice looks like the same kind of flaw Eliezer describes here, the failure to generalize from knowledge of others' failures to appreciation of your own.
There's a better counterargument than that in Tetlock - one of the data points he collected was from a group of university undergraduates, and they did worse than the worst experts, worse than blind chance. Thinking for yourself is the worst option Tetlock considered.
Hal, to be precise, the bias is generalizing from knowledge of others' failures to skepticism about disliked conclusions, but failing to generalize to skepticism about preferred conclusions or one's own conclusions. That is, the error is not absence of generalization, but imbalance of generalization, which is far deadlier. I do agree with you that the reviewer's conclusion is not supported (to put it mildly) by the evidence under review.
So why, then, is this blog not incorporating more statistical and collective de-biasing mechanisms? There are some out-of-the-box web widgets and mildly manual methods to incorporate that would at the very least provide new grist for the discussion mill.
The error here is similar to one I see all the time in beginning philosophy students: when confronted with reasons to be skeptics, they instead become relativists. That is, where the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.
I would love to hear more about such methods, Rafe. This blog tends to be somewhat abstract and "meta", but I would like to do more case studies on specific issues and look at how we could come to a less biased view of the truth. I did a couple of postings on the "Peak Oil" controversy a few months ago along these lines.
Rafe, name three.
Rooney, I don't disagree that this would be a mistake, but in my experience the balance of evidence is very rarely exactly even - because hypotheses have inherent penalties for complexity. Where there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it, not suspend judgment. The only cases I can think of where I suspend judgment are binary or small discrete hypothesis spaces, like "Was it murder or suicide?", or matters like the anthropic principle, where there is no null hypothesis to take refuge in, and any position is attackable.
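One standard way to cash out "inherent penalties for complexity" is a description-length prior; this is my gloss on the comment, not something it states, and the symbols L(H) and the base 2 are my own choices:

```latex
% Give each hypothesis H a prior that shrinks with its description length L(H):
P(H) \propto 2^{-L(H)}
% Every additional bit needed to specify H halves its prior probability, so a
% complicated proposal with no supporting evidence starts out, and stays, very improbable.
```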
I have also had repeated encounters with individuals who take the bias literature to provide 'equal and opposite biases' for every situation, and take this as reason to continue to hold their initial beliefs. The situation is reminiscent of many economic discussions, where bright minds question whether the effect of a change on some quantity will be positive, negative or ambiguous. The discussants eagerly search for at least one theoretical effect that could move the quantity in a positive direction, one that could move it in the negative, and then declare the effect ambiguous after demonstrating their cleverness, without evaluating the actual size of the opposed effects.
I would recommend that when we talk about opposed biases, at least those for which there is an experimental literature, we should give rough indications of their magnitudes to discourage our audiences from utilizing the 'it's all a wash' excuse to avoid analysis.
As someone who seems to have "thrown the kitchen sink" of cognitive biases at the free will problem, I wonder if I've suffered from this meta-bias myself. I find only modest reassurance in the facts that: (i) others have agreed with me and (ii) my challenge for others to find biases that would favor disbelief in free will has gone almost entirely unanswered.
But this is a good reminder that one can get carried away...
Eliezer, I agree that exactly even balances of evidence are rare. However, I would think suspending judgment to be rational in many situations where the balance of evidence is not exactly even. For example, if I roll a die, it would hardly be rational to believe "it will not come up 5 or 6", despite the balance of evidence being in favor of such a belief. If you are willing to make >50% the threshold of rational belief, you will hold numerous false and contradictory beliefs.
Also, I have some doubt about your claim that when "there is n...
However, if you were to wager at even odds on whether or not the die will come up as either 5 or 6, the only rational position is to bet against it.
You need to specify even odds. Bayesians will bet on just about anything if the price is right.
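For concreteness, here is the arithmetic behind this exchange, assuming a fair six-sided die and a simple stake-for-payoff bet (my own illustration, not the commenters'; the stake s and payoff ratio b are my symbols):

```latex
% Fair-die probabilities for the event discussed above.
P(\text{5 or 6}) = \tfrac{2}{6} = \tfrac{1}{3}, \qquad
P(\text{not 5 or 6}) = \tfrac{4}{6} = \tfrac{2}{3}.
% At even odds (stake s wins s), betting on "5 or 6" has negative expected value:
\mathbb{E}[\text{bet on 5 or 6}] = \tfrac{1}{3}\,s - \tfrac{2}{3}\,s = -\tfrac{1}{3}\,s < 0.
% But if the payoff is b per unit staked, betting on "5 or 6" becomes favorable whenever
\tfrac{1}{3}\,b - \tfrac{2}{3} > 0 \;\Longleftrightarrow\; b > 2,
% which is the sense in which a Bayesian will bet on just about anything at the right price.
```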
"Nonetheless, it would not be correct for Archimedes to conclude that Bell's theorem is therefore false."
I think this is a terrible hypothetical to use to illuminate your point, since most of Archimedes' decision would be based on how much evidence is proper to give to the source of information he gets the theorem from. I would say that, for any historically plausible mechanism, he'd certainly be correct in rejecting it.
Rooney, where there isn't any evidence, then indeed it may be appropriate to suspend judgment over a large hypothesis space, which indeed is not the same as being able to justifiably adopt a random such judgment - anyone who wants to assign more than default probability mass is being irrational.
I concur that Bell's theorem is a terrible hypothetical, because the whole point is that, in real life, without evidence, there's absolutely no way for Archimedes to just accidentally hit on Bell's theorem - in his lifetime he will not reach that part of the search ...
Eliezer, I think we are misunderstanding each other, possibly merely about terminology.
When you (and pdf) say "reject", I am taking you to mean "regard as false". I may be mistaken about that.
I would hope that you don't mean that, for if so, your claim that "no evidence in favor -> almost always false" seems bound to lead to massive errors. For example, you have no evidence in favor of the claim "Rooney has string in his pockets". But you wouldn't on such grounds aver that such a claim is almost certainly false...
The probability that an arbitrary person has string in their pockets (given that they're wearing pockets at the time) is knowable, and given no other information we could say that it's X%. The proper attitude towards the claim "Rooney has string in his pockets" is that it has about an X% chance of being true. (Unless we get other evidence to the contrary--and the fact that someone made the claim might be evidence here.)
Say X is 3%. Then I should say that Rooney very likely has no string in his pockets. Say X were 50%. Then I should say that there...
Pdf, maybe you're referring to "I Don't Know"?
Rooney, I think you're interpreting "reject" as "state with certainty that it is not true" or "behave as if there is definite evidence against it". Whereas what I mean is that one should bet at odds that are tiny or even infinitesimal when dealing with an evidentially unsupported belief in a very large search space. You have no choice but to deal this way with the vast majority of such beliefs if you want your total probabilities to sum to 1.
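To spell out the normalization argument in that last sentence (a sketch of my own, with an arbitrary hypothesis count N and threshold k):

```latex
% If H_1, \dots, H_N are mutually exclusive hypotheses in the search space, then
\sum_{i=1}^{N} P(H_i) \le 1,
% so the average prior is at most 1/N, and at most a fraction 1/k of the hypotheses
% can individually receive probability greater than k/N (otherwise the sum would exceed 1).
% For very large N, almost every evidentially unsupported H_i must get probability
% close to zero, which is what "betting at tiny odds" amounts to.
```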
By "suspending judgment" I mean neither accepting a claim as true, nor rejecting it as false. Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself. So, pdf, when you say "The proper attitude towards the claim "Rooney has string in his pockets" is that it has about an X% chance of being true", where X is unknown, I don't see how this is materially different from saying "I don't know if Rooney has string in his pockets", which is to say tha...
You have no choice but to bet at some odds. Life is about action, action is about expected utility, and expected utility demands that you assign some subjective weighting to outcomes based on how likely they are. Walking down the street, I offer to bet you a million dollars against one dollar that a stranger has string in their pockets. Do you take the bet? Whether you say yes or no, you've just made a statement of probability. The null action is also an action. Refusing to bet is like refusing to allow time to pass.
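To make the implied probability explicit, here is the expected-value arithmetic for the bet in that comment (the threshold calculation is mine; p is my symbol for the relevant probability):

```latex
% Let p = P(\text{the stranger has string in their pockets}).
% Accepting the offered bet means betting against string: you win \$1{,}000{,}000 if there
% is no string and lose \$1 if there is. The expected value of accepting is
\mathbb{E} = (1-p)\cdot 1{,}000{,}000 - p \cdot 1,
% which is positive exactly when
p < \tfrac{1{,}000{,}000}{1{,}000{,}001} \approx 0.999999.
% So declining on expected-value grounds asserts p \gtrsim 0.999999, and accepting asserts
% the reverse; either way, a probability has effectively been stated.
```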
Nor do I permit probabilities of zero and one. All belief is belief of probability.
I have to bet on every possible claim I (or any sentient entity capable of propositional attitudes in the universe) might entertain as a belief? That is highly implausible as a descriptive claim. Consider the claim "Xinwei has string in his pockets" (where Xinwei is a Chinese male I've never met). I have no choice but to assign probability to that claim? And all other claims, from "language is the house of being" to "a proof for Goldbach's conjecture will be found by an unaided human mind"? If Eliezer offers me a million ...
Michael Rooney: I don't think Eliezer is saying that it's invalid to say "I don't know." He's saying it's invalid to have as your position "I should not have a position."
The analogy of betting only means that every action you take will have consequences. For example, the decision not to try to assign a probability to the statement that Xinwei has a string in his pocket will have some butterfly effect. You have recognized this, and have also recognized that you don't care, and have taken the position that it doesn't matter. The key here is that, as you admit, you have taken a position.
And now that we know that, we're going to be more biased. Why'd you have to say that?
"Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases."
Well, what about that always taking on the strongest opponent and the strongest arguments business? ;)
Actually, when I see a fellow with a third degree in Philosophy, I leave him to someone who has a similar degree. It isn't that Sorbonne initiates are hopeless; it's arguments with them that really are (hopeless).
"Things will continue historically as they have" is in some contexts hardly the worst thing you could assume, particularly when the alternative is relying on expert advice that a) is from people who historically have not had skill at predicting things and b) are making predictions reliant on complex ideas that you're in no position to personally evaluate.
I think I've got a pretty good feel for those six predictions and have seen them in action numerous times, most especially in discussions on religion. Does the following seem about right, LWers?
The prior attitude effect: both atheists and theists have strong prior feelings about their respective positions, and many of them tend to evaluate their supportive arguments more favourably, while also aggressively attacking counterarguments, as predicted by the disconfirmation bias.
The internet being what it is, provides a ready source of material to confi...
The link to the paper is dead. I found a copy here: Taber & Lodge (2006).
As far as I can tell, there have been few other studies which demonstrate the sophistication effect. One new study on this is West et al. (forthcoming), "Cognitive Sophistication Does Not Attenuate the Bias Blind Spot."
Here is the abstract:
...The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. Bias turns out to be relatively easy to recognize in the behaviors of others, but often difficult to detect in our own judgments. Most previous research on the bias blind spot has focu
"For a true Bayesian, information would never have negative expected utility". I'm probably being a technicality bitch, attacking an unintended interpretation, but I can see bland examples of this being false if taken literally: A robot scans people to see how much knowledge they have and harms them more if they have more knowledge, leading to a potential for negative utility given more knowledge.
"For a true Bayesian, information would never have negative expected utility."
Is this true in general? It seems to me that if a Bayesian has limited information handling ability, then they need to give some thought (not too much!) to the risks of being swamped with information and of spending too many resources on gathering information.
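For the idealized case (free information, unlimited processing capacity), the standard value-of-information argument goes roughly as follows; this is my sketch, not the post's, and the symbols A, X, U are my own notation:

```latex
% Let A be the available actions, \theta the unknown state, X the possible observations,
% and U(a,\theta) the utility. Acting without the observation yields
V_0 = \max_{a \in A} \; \mathbb{E}_{\theta}\!\left[ U(a, \theta) \right].
% Acting after observing X = x yields \max_a \mathbb{E}[U(a,\theta) \mid x], so with free
% information the expected utility is
V_1 = \mathbb{E}_{x}\!\left[ \max_{a \in A} \; \mathbb{E}_{\theta}\!\left[ U(a, \theta) \mid x \right] \right]
    \;\ge\; \max_{a \in A} \; \mathbb{E}_{x}\!\left[ \mathbb{E}_{\theta}\!\left[ U(a, \theta) \mid x \right] \right]
    = V_0,
% since the maximum of an average never exceeds the average of the maxima.
% The inequality assumes the information is costless to gather and process; with
% processing costs, or an adversary who conditions on what you know (as in the robot
% example above), expected utility can indeed go down.
```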
Given the unbelievable difficulty in overcoming cognitive bias (mentioned in this article and many others), is it even realistic to expect that it's possible? Maybe there are a lucky few who may have that capacity, but what about a majority of even those with above-average intelligence, even after years of work at it? Would most of them not just sort of drill themselves into a deeper hole of irrationality? Even discussing their thoughts with others would be of no help, given the fact that most others will be afflicted with cognitive biases as well. Since t...
I think it is a good thing to be humble with yourself, not to argue with yourself. If you are always in self-doubt, you never speak out and learn. If you don't hear yourself, only how 'smart' you sound, you never learn from your mistakes. I try to learn from my - and others' - mistakes, but I think observation of yourself is truly the key to being a rationalist, to removing self-imposed blocks on the path of understanding.
I think it is great that you have such real-life experience, and have the courage to try. Keep living, learning and trying!
(I know this might be off-topic, but this is my first post and I don't know where to start, so I posted somewhere that inspired me to write.)
On a related note about such despicable people: I just had a few minutes' talk with a very old friend of mine who matched this description. I just wanted an update on his situation and to see if the boundless rage and annoyance I experienced then still fit. It's not super relevant, but the exact moment I started writing to him, my hands started shaking, I could feel a pressure on my chest, and my mind started clouding over. It's probably something that's shot into my system, but the exact reason why and what I don't know. Do any of you happen to know about this...
I fear that the most common context in which people learn about cognitive biases is also the most detrimental. That is, they're arguing about something on the internet and someone, within the discussion, links them an article or tries to lecture them about how they really need to learn more about cognitive biases/heuristics/logical fallacies etc.. What I believe commonly happens then is that people realise that these things can be weapons; tools to get the satisfaction of "winning". I really wish everyone would just learn this in some neutral con...
THIS is the proper use of humility. I hope I'm less of a fanatic and more tempered in my beliefs in the future.
It seems to me like this is as intended. Most people who talk about biases and fallacies treat them as simply wrong and bad, rather than as mere tools, more or less sophisticated and consciously knowable. I am skeptical about what good argument and reasoning entail, and whether any single such thing exists.
For a salient example, look no further than the politics board of 4chan. Stickied for the last five years is a list of 24 logical fallacies. Unfortunately, this doesn't seem to dissuade the conspiratorial ramblings, but rather lends an appearance of sophistication to their arguments for anyone unfamiliar with the subject. It's how you get otherwise curious and bright 15-year-olds parroting anti-semitic rhetoric.
Once upon a time I tried to tell my mother about the problem of expert calibration, saying: “So when an expert says they’re 99% confident, it only happens about 70% of the time.” Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: “Of course, you’ve got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—”
And my mother said: “Are you kidding? This is great! I’m going to use it all the time!”
Taber and Lodge’s “Motivated Skepticism in the Evaluation of Political Beliefs” describes the confirmation of six predictions:
1. Prior attitude effect. Subjects who feel strongly about an issue, even when encouraged to be objective, will evaluate supportive arguments more favorably than contrary arguments.
2. Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.
3. Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.
4. Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.
5. Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.
6. Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.
If you’re irrational to start with, having more knowledge can hurt you. For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves.
I’ve seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don’t like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich’s “dysrationalia” sense of stupidity.
You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers? Do you think you’d be helping them—making them more effective rationalists—if you just told them about a list of classic biases?
I recall someone who learned about the calibration/overconfidence problem. Soon after he said: “Well, you can’t trust experts; they’re wrong so often—as experiments have shown. So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—” and went off into this whole complex, error-prone, highly questionable extrapolation. Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.
I told the one about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn’t like, he accused me of being a sophisticated arguer. He didn’t try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.
Even the notion of a “sophisticated arguer” can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don’t like.
I endeavor to learn from my mistakes. The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic. And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects. I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.
I wanted to get my audience interested in the subject. Well, a simple description of conjunction fallacy and representativeness would suffice for that. But suppose they did get interested. Then what? The literature on bias is mostly cognitive psychology for cognitive psychology’s sake. I had to give my audience their dire warnings during that one lecture, or they probably wouldn’t hear them at all.
Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!